
    Identification of Dynamics, Parameters and Synaptic Inputs of a Single Neuron using Bayesian Approach

    Revealing the dynamical mechanisms of the brain in order to understand how it works and processes information has recently stimulated enormous interest in computational neuroscience. Understanding the behavior of a single neuron, the most important building block of the brain, is of core interest in the brain-related sciences. Applying advanced statistical signal processing methods, such as Bayesian methods, to assess the hidden dynamics and estimate the unknown parameters of a single neuron has attracted particular attention in neuroscience. This thesis develops robust and efficient computational techniques based on Bayesian signal processing to elucidate the hidden dynamics and estimate the unknown parameters of a single neuron.

    In the first part of the thesis, Kalman filtering (KF)-based algorithms are derived for the Hodgkin-Huxley (HH) neuronal model, the most detailed biophysical neuronal model, to identify the hidden dynamics and estimate the intrinsic parameters of a single neuron from a single trace of the recorded membrane potential. The unscented KF (UKF) has already been applied to track the dynamics of the HH neuronal model in the literature. We extend the existing KF technique for the HH model to another variant, the extended Kalman filter (EKF). Two KF estimation strategies, dual and joint estimation, are employed in conjunction with the EKF and UKF to simultaneously track the hidden dynamics and estimate the unknown parameters of a single neuron, leading to four KF algorithms: the joint UKF (JUKF), dual UKF (DUKF), joint EKF (JEKF) and dual EKF (DEKF).

    In the second part of the thesis, we investigate the problem of inferring the excitatory and inhibitory synaptic inputs that govern the activity of neurons and process information in the brain. The importance of trial-to-trial variations of synaptic inputs has recently been recognized in neuroscience. Such variations are ignored by most conventional techniques because they are removed when trials are averaged during linear regression. Here, we propose a novel recursive algorithm based on Gaussian mixture Kalman filtering (GMKF) for estimating time-varying excitatory and inhibitory synaptic inputs from single trials of noisy membrane potential. Unlike other recent algorithms, ours does not assume an a priori distribution from which the synaptic inputs are generated. Instead, it recursively estimates this distribution by fitting a Gaussian mixture model. Moreover, a special case of the GMKF with only one mixand, the standard KF, is studied for the same problem.

    Finally, in the third part of the thesis, we consider inferring the synaptic input of a spiking neuron while also estimating its dynamics and parameters. The synaptic input underlying a spiking neuron can effectively elucidate the neuron's information processing mechanism. The concept of blind deconvolution is applied to the HH neuronal model, for the first time in this thesis, to reconstruct the hidden dynamics and synaptic input of a single neuron and to estimate its intrinsic parameters from only a single trace of noisy membrane potential. The blind deconvolution is accomplished via a novel recursive algorithm based on the EKF, followed by an expectation-maximization (EM) algorithm that estimates the statistical parameters of the HH neuronal model.

    Extensive experiments throughout the thesis demonstrate the accuracy, effectiveness and usefulness of the proposed algorithms. Their performance is compared with that of the most recent techniques in the literature. The promising results confirm the robustness and efficiency of the proposed algorithms and suggest that they can be effectively applied to challenging problems in neuroscience.
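
    As a concrete illustration of the joint estimation strategy, the following minimal Python sketch augments the state of a simple leaky-membrane model with an unknown leak conductance and tracks both with an EKF. The leaky model is a stand-in for the full HH equations, and all parameter values are assumptions for illustration; this is not the thesis code.

```python
import numpy as np

# Illustrative joint-EKF sketch: a leaky-membrane model stands in for the
# full Hodgkin-Huxley equations. The unknown leak conductance gL is appended
# to the state and estimated alongside the membrane potential V.
dt, C, EL = 0.1, 1.0, -65.0           # ms, uF/cm^2, mV (assumed values)
gL_true = 0.3                          # mS/cm^2, the parameter to recover
rng = np.random.default_rng(0)

def f(x, I):
    """One Euler step of the augmented dynamics [V, gL]."""
    V, gL = x
    return np.array([V + dt * (-gL * (V - EL) + I) / C, gL])

def F_jac(x):
    """Jacobian of f with respect to the augmented state."""
    V, gL = x
    return np.array([[1.0 - dt * gL / C, -dt * (V - EL) / C],
                     [0.0, 1.0]])

H = np.array([[1.0, 0.0]])             # only the membrane potential is observed
Q = np.diag([1e-3, 1e-6])              # process noise (slow random walk on gL)
R = np.array([[0.25]])                 # measurement noise variance

# Simulate a noisy voltage trace under a known injected current.
T = 2000
I_inj = 2.0 * (np.sin(np.arange(T) * dt * 0.05) > 0)
V, ys = EL, []
for k in range(T):
    V = f(np.array([V, gL_true]), I_inj[k])[0]
    ys.append(V + rng.normal(0.0, 0.5))

# Joint EKF: predict with the Jacobian, correct with each voltage sample.
x = np.array([EL, 0.1])                # deliberately wrong initial gL
P = np.diag([1.0, 0.1])
for k in range(T):
    Fk = F_jac(x)
    x = f(x, I_inj[k])
    P = Fk @ P @ Fk.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (ys[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated gL = {x[1]:.3f} (true {gL_true})")
```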

    Multiplexed coding through synchronous and asynchronous spiking


    Simultaneous Bayesian Estimation of Excitatory and Inhibitory Synaptic Conductances by Exploiting Multiple Recorded Trials

    Advanced statistical methods have enabled trial-by-trial inference of the excitatory and inhibitory synaptic conductances underlying membrane potential recordings. Simultaneous inference of both conductances sheds light on the neural circuits underlying the activity and advances our understanding of neural information processing. Conventional Bayesian methods can infer excitatory and inhibitory synaptic conductances from a single trial of the observed membrane potential. When multiple recorded trials are available, however, this typically leads to suboptimal estimation because common statistics of the synaptic inputs across trials are neglected. Here, we establish a new expectation-maximization (EM) algorithm that improves such single-trial Bayesian methods by exploiting multiple recorded trials to extract common synaptic input statistics across the trials. The proposed EM algorithm is embedded in parallel Kalman filters (KFs) and particle filters (PFs), one per recorded trial, whose outputs are integrated to iteratively update the common synaptic input statistics. These statistics are then used to infer the excitatory and inhibitory synaptic conductances of individual trials. We demonstrate the superior performance of these multiple-trial Kalman filter (MtKF) and particle filter (MtPF) methods relative to the corresponding single-trial methods. While the relative estimation error of the excitatory and inhibitory conductances is known to depend on the level of current injected into a cell, our numerical simulations using the MtKF show that both conductances are reliably inferred at an optimal level of current injection. Finally, we validate the robustness and applicability of our technique through simulation studies and apply the MtKF algorithm to in vivo data recorded from rat barrel cortex.
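
    To make the multiple-trial idea concrete, here is a hedged sketch, not the paper's implementation, in which several noisy trials share a common drive: each EM iteration runs a Kalman filter per trial (E-step) and then re-estimates the shared statistic from all trials (M-step). The scalar AR(1) model and all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of the multiple-trial scheme (MtKF-flavored): trials share a common
# drive u; EM alternates per-trial Kalman filtering with a pooled update of u.
rng = np.random.default_rng(1)
a, q, r = 0.95, 0.05, 0.5             # AR(1) dynamics, process/measurement noise
u_true, T, n_trials = 0.4, 500, 8

# Simulate trials that all share the common drive u_true.
Y = np.zeros((n_trials, T))
for i in range(n_trials):
    x = 0.0
    for k in range(T):
        x = a * x + u_true + rng.normal(0, np.sqrt(q))
        Y[i, k] = x + rng.normal(0, np.sqrt(r))

u_hat = 0.0                            # initial guess for the shared statistic
for it in range(20):                   # EM iterations
    increments = []
    for i in range(n_trials):          # E-step: one Kalman filter per trial
        x_hat, P, prev = 0.0, 1.0, 0.0
        for k in range(T):
            x_pred = a * x_hat + u_hat
            P_pred = a * a * P + q
            K = P_pred / (P_pred + r)
            x_hat = x_pred + K * (Y[i, k] - x_pred)
            P = (1 - K) * P_pred
            increments.append(x_hat - a * prev)
            prev = x_hat
    # M-step: shared drive = average one-step increment pooled over trials
    # (filtered rather than smoothed moments are used, an approximation).
    u_hat = float(np.mean(increments))

print(f"estimated shared drive u = {u_hat:.3f} (true {u_true})")
```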

    Uncovering the Origins of Instability in Dynamical Systems: How Can the Attention Mechanism Help?

    The behavior of a network and its stability are governed both by the dynamics of the individual nodes and by their topological interconnections. The attention mechanism, an integral part of many neural network models, was initially designed for natural language processing (NLP) and has so far shown excellent performance in combining the dynamics of individual nodes with the coupling strengths between them within a network. Despite the undoubted impact of the attention mechanism, it is not yet clear why some nodes of a network obtain higher attention weights. To arrive at more explainable solutions, we look at the problem from a stability perspective. According to stability theory, negative connections in a network can create feedback loops or other complex structures by allowing information to flow in the opposite direction. These structures play a critical role in the dynamics of a complex system and can contribute to abnormal synchronization, amplification, or suppression. We hypothesized that nodes involved in organizing such structures can push the entire network into instability and therefore require more attention during analysis. To test this hypothesis, the attention mechanism, along with spectral and topological stability analyses, was applied to a real-world numerical problem: a linear multi-input multi-output (MIMO) state-space model of a piezoelectric tube actuator. Our findings suggest that attention should be directed toward the collective behavior of imbalanced structures and polarity-driven structural instabilities within the network. The results demonstrate that the nodes receiving more attention cause more instability in the system. Our study provides a proof of concept for understanding why perturbing some nodes of a network may cause dramatic changes in the network dynamics.
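
    The spectral side of such an analysis can be sketched in a few lines: rank the nodes of a linear state-space model by their participation in the least-stable eigenmode, normalized like attention weights. The matrix below is a toy stand-in, not the actuator model from the paper.

```python
import numpy as np

# Spectral stability sketch: find the eigenmode of x' = A x with the largest
# real part, and score each node by its magnitude in that mode's eigenvector.
rng = np.random.default_rng(2)
n = 8
A = -np.eye(n) + 0.4 * rng.standard_normal((n, n))
A[2, 5] = -1.5                         # a strong negative connection forming
A[5, 2] = 1.5                          # a feedback loop between nodes 2 and 5

eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(eigvals.real)          # least-stable (largest real part) mode
print(f"dominant eigenvalue: {eigvals[idx]:.3f} "
      f"({'unstable' if eigvals[idx].real > 0 else 'stable'})")

# Node participation in that mode, normalized to sum to one.
participation = np.abs(eigvecs[:, idx])
weights = participation / participation.sum()
for node in np.argsort(-weights):
    print(f"node {node}: weight {weights[node]:.3f}")
```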

    Gradient-Free Neural Network Training via Synaptic-Level Reinforcement Learning

    An ongoing challenge in neural information processing is the following question: how do neurons adjust their connectivity to improve network-level task performance over time (i.e., actualize learning)? It is widely believed that a consistent, synaptic-level learning mechanism in specific brain regions, such as the basal ganglia, actualizes learning. However, the exact nature of this mechanism remains unclear. Here, we investigate the use of universal synaptic-level algorithms in training connectionist models. Specifically, we propose an algorithm based on reinforcement learning (RL) that generates and applies a simple, biologically inspired synaptic-level learning policy for neural networks. In this algorithm, the action space for each synapse consists of a small increase, a small decrease, or a null action on the connection strength. To test the algorithm, we applied it to a multilayer perceptron (MLP). The algorithm yields a static synaptic learning policy that enables the simultaneous training of over 20,000 parameters (i.e., synapses) and consistent learning convergence on simulated decision-boundary matching and optical character recognition tasks. The trained networks achieve character-recognition performance comparable to identically shaped networks trained with gradient descent. The approach has two significant advantages over traditional gradient-descent-based optimization. First, its robustness and lack of reliance on gradient computations open the door to new techniques for training difficult-to-differentiate artificial neural networks, such as spiking neural networks (SNNs) and recurrent neural networks (RNNs). Second, the method's simplicity provides a unique opportunity for further development of local, information-driven multiagent connectionist models for machine intelligence, analogous to cellular automata.
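
    The flavor of such a synaptic-level scheme can be sketched as follows. This simplified stand-in replaces the paper's learned RL policy with reward-guided random actions drawn from the same three-action space, on a toy XOR-style task; it is for illustration only.

```python
import numpy as np

# Gradient-free sketch: each synapse of a tiny MLP picks one of three actions
# (increase, decrease, do nothing); the joint change is kept only if the task
# reward (negative loss) improves.
rng = np.random.default_rng(3)
DELTA = 0.05                           # step size of the synaptic actions

def forward(W1, W2, X):
    h = np.tanh(X @ W1)
    return h @ W2

def loss(W1, W2, X, y):
    p = forward(W1, W2, X).ravel()
    return np.mean((p - y) ** 2)

# XOR-style decision-boundary task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)
W1 = 0.5 * rng.standard_normal((2, 8))
W2 = 0.5 * rng.standard_normal((8, 1))

best = loss(W1, W2, X, y)
for step in range(5000):
    # Sample an action in {-DELTA, 0, +DELTA} independently per synapse.
    a1 = DELTA * rng.integers(-1, 2, W1.shape)
    a2 = DELTA * rng.integers(-1, 2, W2.shape)
    trial = loss(W1 + a1, W2 + a2, X, y)
    if trial < best:                   # reward signal: keep improving actions
        W1, W2, best = W1 + a1, W2 + a2, trial

print(f"final loss: {best:.4f}")
```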

    A Time-Varying Information Measure for Tracking Dynamics of Neural Codes in a Neural Ensemble

    The amount of information carried by differentially correlated spikes in a neural ensemble is not the same; the information in different types of spikes is associated with different features of the stimulus. By calculating a neural ensemble's information in response to a mixed stimulus comprising slow and fast signals, we show that the entropies of synchronous and asynchronous spikes differ, and that their probability distributions are distinctly separable. We further show that these spikes carry different amounts of information. We propose a time-varying entropy (TVE) measure to track the dynamics of a neural code in an ensemble of neurons at each time bin. By applying the TVE to a multiplexed code, we show that synchronous and asynchronous spikes carry information on different time scales. Finally, a decoder based on Kalman filtering is developed to reconstruct the stimulus from the spikes. We demonstrate that the slow and fast features of the stimulus can be fully reconstructed when this decoder is applied to asynchronous and synchronous spikes, respectively. The significance of this work is that the TVE can identify different types of information (for example, corresponding to synchronous and asynchronous spikes) that may coexist in a neural code.
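
    A minimal version of a binned, time-varying entropy readout might look like the following sketch. It uses synthetic Bernoulli spikes and a plug-in entropy estimator, and illustrates only the per-bin idea, not the paper's exact TVE definition.

```python
import numpy as np

# Time-varying entropy sketch: for each time bin, estimate the Shannon
# entropy of the ensemble spike-count distribution across trials.
rng = np.random.default_rng(4)
n_neurons, n_trials, n_bins = 20, 200, 100

# Firing probability alternates between a high (synchronous-like) and a low
# (asynchronous-like) regime across bins.
rates = np.where(np.arange(n_bins) % 20 < 10, 0.6, 0.1)
spikes = rng.random((n_trials, n_neurons, n_bins)) < rates  # Bernoulli bins

def entropy(counts):
    """Plug-in Shannon entropy (bits) of an empirical count distribution."""
    p = np.bincount(counts) / counts.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# TVE-style curve: one entropy value per time bin, computed over trials.
tve = [entropy(spikes[:, :, k].sum(axis=1)) for k in range(n_bins)]
print("entropy in first 5 bins:", np.round(tve[:5], 2))
```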

    Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition

    The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding.
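
    The noise effect described here can be reproduced qualitatively with a few lines of simulation: a leaky integrate-and-fire neuron whose rate-versus-input curve is step-like without noise but graded with background noise, broadening the dynamic range. All parameters below are assumptions for illustration.

```python
import numpy as np

# Leaky integrate-and-fire sketch of noise-induced gradual recruitment:
# subthreshold inputs evoke no spikes without noise, but background noise
# lets weak inputs occasionally cross threshold, smoothing the rate curve.
rng = np.random.default_rng(5)
dt, tau, v_th, v_reset, T = 0.1, 10.0, 1.0, 0.0, 20000  # ms-scale units

def firing_rate(I, sigma):
    v, spikes = 0.0, 0
    for _ in range(T):
        v += dt * (-v + I) / tau + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            v, spikes = v_reset, spikes + 1
    return spikes / (T * dt)           # spikes per ms

for I in [0.6, 0.8, 1.0, 1.2, 1.4]:
    quiet = firing_rate(I, sigma=0.0)
    noisy = firing_rate(I, sigma=0.3)
    print(f"I={I:.1f}  rate(no noise)={quiet:.4f}  rate(noise)={noisy:.4f}")
```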
